1 Load packages

2 Helper functions

3 Experiment 1a

3.3 Stats

3.3.1 Overall logistic regression

term                  estimate std.error statistic p.value
(Intercept)               2.94      1.03      2.87    0.00
age_group7               16.62   2404.67      0.01    0.99
age_group9               16.62   2404.67      0.01    0.99
conditionD               -2.54      1.12     -2.26    0.02
age_group7:conditionD   -17.23   2404.67     -0.01    0.99
age_group9:conditionD   -16.83   2404.67     -0.01    0.99

3.3.2 Logistic regression in overdetermined condition

term        estimate std.error statistic p.value
(Intercept)     0.41      0.46      0.89    0.37
age_group7     -0.61      0.64     -0.95    0.34
age_group9     -0.20      0.64     -0.32    0.75

3.3.3 Binomial exact test

estimate statistic p.value parameter conf.low conf.high method              alternative
    0.53        32     0.7        60      0.4      0.66 Exact binomial test two.sided
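The test can be reproduced in base R from the numbers in the table (32 successes out of 60 trials against chance level .5):

```r
# Exact binomial test against chance, counts taken from the table above
res <- binom.test(x = 32, n = 60, p = 0.5)
round(unname(res$estimate), 2)  # 0.53
round(res$p.value, 2)           # ~0.70
res$conf.int                    # two-sided 95% CI, ~[0.40, 0.66]
```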

4 Experiment 1b

4.2 Demographics

age_group n
5 21
7 26

4.3 Stats

#> `summarise()` ungrouping output (override with `.groups` argument)
age_group n n_correct_disjunctive n_correct_conjunctive
5 21 21 21
7 26 26 26
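Counts like these come from a grouped summarise; the `.groups` message above can be silenced explicitly. A minimal sketch with toy data standing in for the Experiment 1b responses (dplyr; the column names are assumptions):

```r
library(dplyr)

# toy data: both age groups at ceiling, as in the table above
d <- tibble(
  age_group = rep(c(5, 7), times = c(21, 26)),
  correct_disjunctive = 1,
  correct_conjunctive = 1
)

counts <- d %>%
  group_by(age_group) %>%
  summarise(n = n(),
            n_correct_disjunctive = sum(correct_disjunctive),
            n_correct_conjunctive = sum(correct_conjunctive),
            .groups = "drop")   # silences the regrouping message
counts
```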

5 Experiment 2

5.3 Stats

5.3.2 Mixed Model Accuracy Analysis

To analyse accuracy as a function of age group, the actual outcome, and the counterfactual outcome, we first need to prepare a new data set.

#> `summarise()` regrouping output by 'participant', 'age_group', 'outcome_actual' (override with `.groups` argument)

We then fit a series of models (or read the fitted model files). mod0 is the maximal model, mod3 is the final model.
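The random-effects structures of the four models form a strictly nested series, taken from the `Model:` lines printed below; this sketch shows the formulas only (the `afex::mixed()` fitting calls with `family = binomial` and `method = "LRT"` are assumed and omitted):

```r
# Nested random-effects series, from maximal (mod0) to intercept-only (mod3);
# base-R formula objects only -- the fitting calls themselves are omitted.
forms <- list(
  mod0 = prop ~ age_group * outcome_actual * outcome_counterfactual +
    (outcome_actual * outcome_counterfactual || participant),
  mod1 = prop ~ age_group * outcome_actual * outcome_counterfactual +
    (outcome_actual + outcome_counterfactual || participant),
  mod2 = prop ~ age_group * outcome_actual * outcome_counterfactual +
    (outcome_actual || participant),
  mod3 = prop ~ age_group * outcome_actual * outcome_counterfactual +
    (1 | participant)
)
sapply(forms, class)  # all "formula"
```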

The full model shows a main effect of age group as well as an interaction of age group with the counterfactual outcome, but it also produces convergence warnings (including for the full model itself):

#> Warning: lme4 reported (at least) the following warnings for 'full':
#>   * Model failed to converge with max|grad| = 0.0181666 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group':
#>   * Model failed to converge with max|grad| = 0.012805 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'outcome_actual':
#>   * Model failed to converge with max|grad| = 0.0466433 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0417475 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_actual':
#>   * Model failed to converge with max|grad| = 0.0852291 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0635005 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'outcome_actual:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0737963 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_actual:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0478644 (tol = 0.002, component 1)
#> Mixed Model Anova Table (Type 3 tests, LRT-method)
#> 
#> Model: prop ~ age_group * outcome_actual * outcome_counterfactual + 
#> Model:     (outcome_actual * outcome_counterfactual || participant)
#> Data: df2
#> Df full model: 16
#>                                            Effect df   Chisq p.value
#> 1                                       age_group  2  8.80 *    .012
#> 2                                  outcome_actual  1    0.44    .509
#> 3                          outcome_counterfactual  1    1.20    .272
#> 4                        age_group:outcome_actual  2    4.57    .102
#> 5                age_group:outcome_counterfactual  2 9.52 **    .009
#> 6           outcome_actual:outcome_counterfactual  1    1.17    .279
#> 7 age_group:outcome_actual:outcome_counterfactual  2    0.48    .786
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1

We consequently remove the random slope for the interaction. This model shows the same pattern of results, but also convergence warnings:

#> Warning: lme4 reported (at least) the following warnings for 'full':
#>   * Model failed to converge with max|grad| = 0.0146004 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group':
#>   * boundary (singular) fit: see ?isSingular
#> Warning: lme4 reported (at least) the following warnings for 'outcome_actual':
#>   * Model failed to converge with max|grad| = 0.0261344 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0621988 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_actual':
#>   * Model failed to converge with max|grad| = 0.0661235 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0578913 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'outcome_actual:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0362014 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_actual:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.022484 (tol = 0.002, component 1)
#> Mixed Model Anova Table (Type 3 tests, LRT-method)
#> 
#> Model: prop ~ age_group * outcome_actual * outcome_counterfactual + 
#> Model:     (outcome_actual + outcome_counterfactual || participant)
#> Data: df2
#> Df full model: 15
#>                                            Effect df   Chisq p.value
#> 1                                       age_group  2  8.80 *    .012
#> 2                                  outcome_actual  1    0.44    .509
#> 3                          outcome_counterfactual  1    1.20    .272
#> 4                        age_group:outcome_actual  2    4.57    .102
#> 5                age_group:outcome_counterfactual  2 9.52 **    .009
#> 6           outcome_actual:outcome_counterfactual  1    1.17    .280
#> 7 age_group:outcome_actual:outcome_counterfactual  2    0.48    .786
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1

Inspection of the variance estimates shows that the one for the counterfactual outcome is smallest.

#>  Groups        Name                        Std.Dev. 
#>  participant   (Intercept)                 1.8854359
#>  participant.1 re1.outcome_actual1         0.0042371
#>  participant.2 re1.outcome_counterfactual1 0.0028784

We remove this in the next step, but the model still shows convergence warnings:

#> Warning: lme4 reported (at least) the following warnings for 'full':
#>   * Model failed to converge with max|grad| = 0.0510619 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group':
#>   * Model failed to converge with max|grad| = 0.0108255 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'outcome_actual':
#>   * Model failed to converge with max|grad| = 0.0301588 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.00309515 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_actual':
#>   * Model failed to converge with max|grad| = 0.0442321 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0472595 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'outcome_actual:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0600451 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_actual:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0124133 (tol = 0.002, component 1)
#> Mixed Model Anova Table (Type 3 tests, LRT-method)
#> 
#> Model: prop ~ age_group * outcome_actual * outcome_counterfactual + 
#> Model:     (outcome_actual || participant)
#> Data: df2
#> Df full model: 14
#>                                            Effect df   Chisq p.value
#> 1                                       age_group  2  8.79 *    .012
#> 2                                  outcome_actual  1    0.43    .510
#> 3                          outcome_counterfactual  1    1.20    .273
#> 4                        age_group:outcome_actual  2    4.57    .102
#> 5                age_group:outcome_counterfactual  2 9.59 **    .008
#> 6           outcome_actual:outcome_counterfactual  1    1.17    .280
#> 7 age_group:outcome_actual:outcome_counterfactual  2    0.48    .787
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1

We consequently remove the last remaining random slope as well. This still shows some warnings, but none for the full model. The pattern of results remains the same.

#> Warning: lme4 reported (at least) the following warnings for 'outcome_actual':
#>   * Model failed to converge with max|grad| = 0.0058044 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_actual':
#>   * Model failed to converge with max|grad| = 0.00416424 (tol = 0.002, component 1)
#> Warning: lme4 reported (at least) the following warnings for 'age_group:outcome_counterfactual':
#>   * Model failed to converge with max|grad| = 0.0108411 (tol = 0.002, component 1)
#> Mixed Model Anova Table (Type 3 tests, LRT-method)
#> 
#> Model: prop ~ age_group * outcome_actual * outcome_counterfactual + 
#> Model:     (1 | participant)
#> Data: df2
#> Df full model: 13
#>                                            Effect df   Chisq p.value
#> 1                                       age_group  2  8.80 *    .012
#> 2                                  outcome_actual  1    0.44    .509
#> 3                          outcome_counterfactual  1    1.20    .273
#> 4                        age_group:outcome_actual  2    4.57    .102
#> 5                age_group:outcome_counterfactual  2 9.59 **    .008
#> 6           outcome_actual:outcome_counterfactual  1    1.17    .280
#> 7 age_group:outcome_actual:outcome_counterfactual  2    0.48    .786
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1

We can now investigate the interaction of age group and the counterfactual outcome. For this we use the final model.

#> Aggregating data over: participant
#> NOTE: Results may be misleading due to involvement in interactions

This shows an age effect if the counterfactual outcome is the same (i.e., over-determined cases), but not if it is different (i.e., singly-determined cases). This is also confirmed by follow-up tests. First, for the final model:

#> NOTE: Results may be misleading due to involvement in interactions
#> outcome_counterfactual = different:
#>  contrast estimate    SE  df z.ratio p.value
#>  4 - 5       0.914 0.789 Inf  1.159  0.4780 
#>  4 - 6      -0.767 0.776 Inf -0.988  0.5842 
#>  5 - 6      -1.681 0.818 Inf -2.055  0.0994 
#> 
#> outcome_counterfactual = same:
#>  contrast estimate    SE  df z.ratio p.value
#>  4 - 5      -0.763 0.684 Inf -1.115  0.5048 
#>  4 - 6      -2.607 0.717 Inf -3.637  0.0008 
#>  5 - 6      -1.844 0.706 Inf -2.610  0.0246 
#> 
#> Results are averaged over the levels of: outcome_actual 
#> Results are given on the log odds ratio (not the response) scale. 
#> P value adjustment: tukey method for comparing a family of 3 estimates

And for the maximal model:

#> NOTE: Results may be misleading due to involvement in interactions
#> outcome_counterfactual = different:
#>  contrast estimate    SE  df z.ratio p.value
#>  4 - 5       0.911 0.789 Inf  1.154  0.4810 
#>  4 - 6      -0.768 0.776 Inf -0.990  0.5834 
#>  5 - 6      -1.679 0.818 Inf -2.051  0.1002 
#> 
#> outcome_counterfactual = same:
#>  contrast estimate    SE  df z.ratio p.value
#>  4 - 5      -0.763 0.685 Inf -1.115  0.5049 
#>  4 - 6      -2.608 0.717 Inf -3.635  0.0008 
#>  5 - 6      -1.844 0.707 Inf -2.608  0.0247 
#> 
#> Results are averaged over the levels of: outcome_actual 
#> Results are given on the log odds ratio (not the response) scale. 
#> P value adjustment: tukey method for comparing a family of 3 estimates
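Contrasts like those above are typically obtained via emmeans. A self-contained illustration of the machinery on simulated data (not the study data; lme4 and emmeans assumed installed, all names below invented):

```r
# SIMULATED illustration of the follow-up tests; not the study data.
library(lme4)
library(emmeans)

set.seed(1)
d <- expand.grid(participant = 1:60,
                 outcome_counterfactual = c("different", "same"))
d$age_group <- factor(rep(c("4", "5", "6"), each = 20))[d$participant]
# older children more accurate in the "same" (over-determined) cells
p <- with(d, plogis(-0.5 + 1.5 * (age_group == "6") *
                      (outcome_counterfactual == "same")))
d$correct <- rbinom(nrow(d), size = 8, prob = p)

m <- glmer(cbind(correct, 8 - correct) ~
             age_group * outcome_counterfactual + (1 | participant),
           data = d, family = binomial)

# Tukey-adjusted pairwise age-group contrasts within each counterfactual
# level, on the log odds ratio scale (as in the output above)
pairs(emmeans(m, ~ age_group | outcome_counterfactual))
```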

Do we see an effect of “wishful thinking” such that children in our study were more likely to be correct on trials for which the ball could have gone into the goal but did not (i.e., an actual outcome by counterfactual outcome interaction)? No.

#> NOTE: Results may be misleading due to involvement in interactions
#>  outcome_actual outcome_counterfactual  prob     SE  df asymp.LCL asymp.UCL
#>  goal           different              0.344 0.0875 Inf     0.197     0.529
#>  miss           different              0.318 0.0880 Inf     0.174     0.508
#>  goal           same                   0.346 0.0735 Inf     0.218     0.500
#>  miss           same                   0.461 0.0809 Inf     0.311     0.618
#> 
#> Results are averaged over the levels of: age_group 
#> Confidence level used: 0.95 
#> Intervals are back-transformed from the logit scale

5.3.3 MPT analysis

We first need to prepare the data for fitting.

We then fit the full model. Specifically, we fit one model to the data of all age groups, but add age_group as a covariate on all three model parameters. This is somewhat time-intensive, so we save the results (or load the saved results if they exist).

We first check for chain convergence (i.e., Rhat and neff) and get an overview:

#> Call: 
#> traitMPT(eqnfile = "model.eqn", data = di, restrictions = list("g1=g2=0.5"), 
#>     covData = dc, predStructure = list("m_o m_t s ; age"), predType = "f", 
#>     n.iter = 320000, n.adapt = 1e+05, n.burnin = 20000, n.thin = 300, 
#>     n.chains = 4)
#> 
#> Group-level medians of MPT parameters (probability scale):
#>           Mean    SD  2.5%   50% 97.5% Time-series SE n.eff  Rhat R_95%
#> mean_m_o 0.984 0.019 0.932 0.991 1.000          0.000  3360 1.004 1.006
#> mean_m_t 0.550 0.261 0.062 0.575 0.966          0.004  3691 1.002 1.006
#> mean_s   0.540 0.105 0.306 0.550 0.720          0.002  4007 1.001 1.002
#> 
#> Mean/Median of latent-trait values (probit-scale) across individuals:
#>                Mean    SD   2.5%   50% 97.5% Time-series SE n.eff  Rhat R_95%
#> latent_mu_m_o 2.390 0.523  1.489 2.348 3.545          0.009  3628 1.000 1.000
#> latent_mu_m_t 0.163 0.858 -1.542 0.190 1.820          0.014  3674 1.002 1.007
#> latent_mu_s   0.103 0.276 -0.507 0.126 0.583          0.004  4007 1.001 1.002
#> 
#> Standard deviation of latent-trait values (probit scale) across individuals:
#>                   Mean    SD  2.5%   50%  97.5% Time-series SE n.eff  Rhat
#> latent_sigma_m_o 1.232 0.724 0.092 1.173  2.805          0.012  3512 1.002
#> latent_sigma_m_t 7.558 4.070 3.039 6.566 17.783          0.094  1879 1.018
#> latent_sigma_s   1.479 0.516 0.741 1.389  2.774          0.009  3629 1.000
#>                  R_95%
#> latent_sigma_m_o 1.006
#> latent_sigma_m_t 1.030
#> latent_sigma_s   1.001
#> 
#> Correlations of latent-trait values on probit scale:
#>                Mean    SD   2.5%    50% 97.5% Time-series SE n.eff  Rhat R_95%
#> rho[m_o,m_t] -0.245 0.392 -0.885 -0.276 0.609          0.009  2060 1.001 1.004
#> rho[m_o,s]    0.168 0.405 -0.653  0.191 0.829          0.008  2469 1.000 1.001
#> rho[m_t,s]    0.276 0.257 -0.248  0.293 0.725          0.004  3624 1.000 1.002
#> 
#> Correlations (posterior mean estimates) in matrix form:
#>        m_o    m_t     s
#> m_o  1.000 -0.245 0.168
#> m_t -0.245  1.000 0.276
#> s    0.168  0.276 1.000
#> 
#> 
#> ##############
#>  Model fit statistics (posterior predictive p-values):
#> Use PPP(fittedModel) to get T1 and T2 posterior predictive checks.
#> 
#> Effects of factors on latent scale (additive shift from overall mean):
#>                     Mean    SD   2.5%    50%  97.5% Time-series SE n.eff  Rhat
#> factor_m_o_age[1]  0.037 0.589 -1.082  0.012  1.288          0.010  3563 1.001
#> factor_m_o_age[2] -0.448 0.588 -1.642 -0.447  0.763          0.010  3618 1.001
#> factor_m_o_age[3]  0.411 0.596 -0.643  0.370  1.713          0.010  3775 1.001
#> factor_m_t_age[1] -1.321 1.701 -5.647 -0.982  0.986          0.027  3854 1.006
#> factor_m_t_age[2] -0.523 1.355 -3.683 -0.389  1.929          0.021  4000 1.004
#> factor_m_t_age[3]  1.844 1.892 -0.561  1.421  6.510          0.032  3490 1.007
#> factor_s_age[1]_4 -0.661 0.368 -1.498 -0.623 -0.031          0.006  3601 1.001
#> factor_s_age[2]_5  0.040 0.349 -0.662  0.043  0.752          0.005  4550 1.001
#> factor_s_age[3]_6  0.621 0.360  0.023  0.584  1.404          0.006  4078 1.000
#>                   R_95%
#> factor_m_o_age[1] 1.001
#> factor_m_o_age[2] 1.004
#> factor_m_o_age[3] 1.003
#> factor_m_t_age[1] 1.009
#> factor_m_t_age[2] 1.007
#> factor_m_t_age[3] 1.014
#> factor_s_age[1]_4 1.005
#> factor_s_age[2]_5 1.004
#> factor_s_age[3]_6 1.003
#> 
#> Factor SD on latent scale:
#>                    Mean    SD  2.5%   50% 97.5% Time-series SE n.eff  Rhat
#> SD_factor_m_o_age 1.207 1.049 0.410 0.918 3.637          0.017  3890 1.016
#> SD_factor_m_t_age 2.493 2.860 0.476 1.673 9.566          0.049  3345 1.018
#> SD_factor_s_age   1.177 1.071 0.423 0.926 3.319          0.017  3910 1.035
#>                   R_95%
#> SD_factor_m_o_age 1.018
#> SD_factor_m_t_age 1.022
#> SD_factor_s_age   1.036

We also check the trace plots for the (group-level) mean parameters:

The group-level SD parameters (on probit scale):

The correlation parameters among the individual effects:

And finally the parameters for the effect of age on the model parameter (as well as their SDs).

5.3.3.1 Model Fit

We evaluate model fit using posterior predictive tests. There is no evidence for misfit. All posterior predictive \(p\)-values are larger than \(.05\). This suggests that the model is able to adequately describe the data.

#>  ## Mean structure (T1):
#>  Observed =  0.07636708 ; Predicted =  0.03366389 ; p-value =  0.145 
#> 
#> ## Covariance structure (T2):
#>  Observed =  0.5087063 ; Predicted =  0.2788794 ; p-value =  0.139 
#> 
#> ## Individual fit (T1):
#>     1     2     3     4     5     6     7     8     9    10    11    12    13 
#> 0.426 0.743 0.752 0.423 0.299 0.695 0.075 0.743 0.292 0.327 0.712 0.386 0.287 
#>    14    15    16    17    18    19    20    21    22    23    24    25    26 
#> 0.414 0.295 0.255 0.281 0.120 0.237 0.430 0.245 0.453 0.424 0.290 0.334 0.253 
#>    27    28    29    30    31    32    33    34    35    36    37    38    39 
#> 0.420 0.286 0.264 0.301 0.339 0.164 0.461 0.681 0.726 0.431 0.385 0.116 0.809 
#>    40    41    42    43    44    45    46    47    48    49    50    51    52 
#> 0.547 0.698 0.303 0.281 0.787 0.451 0.279 0.286 0.371 0.338 0.759 0.247 0.731 
#>    53    54    55    56    57    58    59    60    61    62    63    64    65 
#> 0.394 0.309 0.382 0.723 0.471 0.371 0.244 0.228 0.495 0.256 0.219 0.238 0.076 
#>    66    67    68    69    70    71    72 
#> 0.527 0.402 0.311 0.466 0.304 0.492 0.402

This is also visible when plotting the distribution of individual \(ppp\)-values.

5.3.3.2 Group-Level Estimates

We can investigate the group-level parameter estimates (posterior modes) and 80% HDIs in tabular form:

#> Joining, by = c(".chain", ".iteration", ".draw", "parameter")
#> # A tibble: 9 x 8
#> # Groups:   parameter [3]
#>   parameter    age_group estimate .lower .upper .width .point .interval
#>   <fct>        <fct>        <dbl>  <dbl>  <dbl>  <dbl> <chr>  <chr>    
#> 1 italic(s)    4           0.304   0.12   0.491    0.8 mode   hdci     
#> 2 italic(s)    5           0.595   0.365  0.794    0.8 mode   hdci     
#> 3 italic(s)    6           0.786   0.634  0.914    0.8 mode   hdci     
#> 4 italic(m[o]) 4           0.997   0.961  1.00     0.8 mode   hdci     
#> 5 italic(m[o]) 5           0.993   0.904  1.00     0.8 mode   hdci     
#> 6 italic(m[o]) 6           0.999   0.984  1        0.8 mode   hdci     
#> 7 italic(m[t]) 4           0.0227  0      0.635    0.8 mode   hdci     
#> 8 italic(m[t]) 5           0.0376  0      0.804    0.8 mode   hdci     
#> 9 italic(m[t]) 6           0.991   0.702  1        0.8 mode   hdci
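Tables like this come from summarising the posterior draws with a mode/highest-density point-interval. A self-contained sketch (dplyr and tidybayes assumed installed; simulated draws, not the real chains):

```r
library(dplyr)
library(tidybayes)

set.seed(1)
# stand-in posterior draws for one parameter/age-group cell
draws <- tibble(parameter = "s", age_group = "6",
                value = rbeta(4000, 8, 2))

res <- draws %>%
  group_by(parameter, age_group) %>%
  mode_hdci(value, .width = 0.8)   # mode + 80% highest-density interval
res
```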

To investigate the difference between age groups, we calculate the difference distribution of the group-level posterior for each pairwise comparison of age groups for each parameter. We first look at the 80% and 95% highest density intervals of the difference distributions.

#> # A tibble: 18 x 7
#>    inter         estimate  .lower .upper .width .point .interval
#>    <fct>            <dbl>   <dbl>  <dbl>  <dbl> <chr>  <chr>    
#>  1 m_o:6 - m_o:4   0.0150 -0.0197 0.0444   0.8  mean   hdci     
#>  2 m_o:6 - m_o:5   0.0465 -0.0217 0.0972   0.8  mean   hdci     
#>  3 m_o:5 - m_o:4  -0.0315 -0.0979 0.0461   0.8  mean   hdci     
#>  4 m_t:6 - m_t:4   0.537   0.130  1        0.8  mean   hdci     
#>  5 m_t:6 - m_t:5   0.406   0.0333 1        0.8  mean   hdci     
#>  6 m_t:5 - m_t:4   0.131  -0.355  0.740    0.8  mean   hdci     
#>  7 s:6 - s:4       0.440   0.212  0.677    0.8  mean   hdci     
#>  8 s:6 - s:5       0.196  -0.0725 0.417    0.8  mean   hdci     
#>  9 s:5 - s:4       0.243  -0.0114 0.494    0.8  mean   hdci     
#> 10 m_o:6 - m_o:4   0.0150 -0.0565 0.106    0.95 mean   hdci     
#> 11 m_o:6 - m_o:5   0.0465 -0.0506 0.205    0.95 mean   hdci     
#> 12 m_o:5 - m_o:4  -0.0315 -0.211  0.100    0.95 mean   hdci     
#> 13 m_t:6 - m_t:4   0.537  -0.146  1        0.95 mean   hdci     
#> 14 m_t:6 - m_t:5   0.406  -0.190  1        0.95 mean   hdci     
#> 15 m_t:5 - m_t:4   0.131  -0.574  1.00     0.95 mean   hdci     
#> 16 s:6 - s:4       0.440   0.0872 0.803    0.95 mean   hdci     
#> 17 s:6 - s:5       0.196  -0.173  0.606    0.95 mean   hdci     
#> 18 s:5 - s:4       0.243  -0.147  0.655    0.95 mean   hdci
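Each row above summarises a difference distribution obtained by subtracting the group-level posterior draws of one age group from another. A base-R sketch with illustrative draws (not the real posterior):

```r
set.seed(2)
# illustrative posterior draws for s in the 4- and 6-year-old groups
s4 <- rbeta(4000, 3, 7)
s6 <- rbeta(4000, 8, 2)
d64 <- s6 - s4                 # difference distribution for s: 6 - 4
mean(d64)                      # point estimate
quantile(d64, c(0.1, 0.9))     # a simple 80% interval (the table uses HDIs)
```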

We can also plot the difference distributions plus 80% and 95% credible intervals. From this it is clear that there is not really any evidence for differences in \(m_o\). However, there is some evidence for a difference in \(m_t\). Furthermore, the monotonic increase for \(s\) also receives some support.

5.3.3.3 Individual-Level Parameters and Individual Differences

We first inspect the posterior distribution of the sigma parameters.

#> # A tibble: 6 x 7
#>   parameter .value  .lower .upper .width .point .interval
#>   <fct>      <dbl>   <dbl>  <dbl>  <dbl> <chr>  <chr>    
#> 1 s           1.27 0.823     1.92   0.8  mode   hdci     
#> 2 m_o         1.18 0.106     1.79   0.8  mode   hdci     
#> 3 m_t         4.97 3.08     10.3    0.8  mode   hdci     
#> 4 s           1.27 0.637     2.55   0.95 mode   hdci     
#> 5 m_o         1.18 0.00296   2.43   0.95 mode   hdci     
#> 6 m_t         4.97 2.22     15.3    0.95 mode   hdci

In addition to the group-level parameters, we can also investigate the distribution of individual-level parameters. For this, we randomly sample 200 of the 4000 posterior samples and plot the density of the individual-level parameter distribution separately per parameter and age group. We plot the individual densities using an alpha value of 0.05, so that darker regions indicate overlap among the densities and therefore higher density in the distribution of the individual-level parameters.

To ensure this pattern is not an artifact of applying a density estimator individually to each posterior sample, we can simply plot a histogram of these data, which ignores the individual posterior draws. Should this show a very different pattern, it would suggest some caution about the density estimates. However, the pattern looks essentially the same as in the density plot.

For \(s\), we see that the distributions of individual-level parameters are a lot less peaked than for the other two parameters. For the 4-year-olds we see a small peak around 0, with the remaining probability mass distributed more towards the left side of the parameter space. For the 5-year-olds we see a slightly bimodal pattern, but also some evidence for an almost uniform distribution. For the 6-year-olds we see almost a mirror image of the 4-year-olds, with a larger peak at 1 and some evidence for a small peak at 0.

For \(m_o\) we can see that the vast majority of individual-level estimates are clustered towards the right boundary of the parameter space. Very few individual-level parameter estimates appear to be below .75. Furthermore, there seem to be few differences between age groups, which is consistent with the pattern of the group-level parameters.

For \(m_t\) we see quite a different pattern, suggesting a bimodal distribution with the modes at either end of the parameter space. Furthermore, the weight given to the two modes seems to shift across age groups, with the 4- and 5-year-olds having a larger peak at 0, whereas for the 6-year-olds there appears to be a larger peak at 1. This bimodal pattern is consistent with the large SD of the group-level distribution (posterior mean = 7.56), which is considerably larger than the SDs for the other two parameters (posterior means < 1.5). The reason for this bimodality given a normal group-level distribution is that the latent-trait model assumes a normal distribution on an unconstrained latent scale, which is then transformed into the probability space using the probit transformation.
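This bimodality argument can be checked directly in base R: drawing from a normal with the fitted group-level values for \(m_t\) (posterior means from the summary above: mean ≈ 0.16, SD ≈ 7.56) and probit-transforming puts most of the mass at the two boundaries:

```r
set.seed(123)
# group-level normal on the probit (latent) scale, values from the summary
latent <- rnorm(1e5, mean = 0.16, sd = 7.56)
m_t    <- pnorm(latent)   # transform to the probability scale
c(near_0 = mean(m_t < 0.1), near_1 = mean(m_t > 0.9))
# roughly 40% of the mass sits in each boundary region
```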

When comparing the individual-level distributions with the group-level distributions, we see quite a few differences. This suggests that the group-level mean does not necessarily provide a good representation of all individual parameters. One possible reason is parameter trade-offs at the group level. We investigate this in the next plot.


This plot provides little evidence for large parameter trade-offs. We see a mild positive correlation, \(r = .135\), between \(\mu\) and \(\sigma\) for \(m_o\), and a negative correlation of \(r = -.31\) between \(\mu\) and \(\sigma\) for \(s\). Overall this suggests that parameter trade-offs are not mainly responsible for the disagreement between the individual-level and group-level estimates.

5.3.4 Latent Class Approach

To further explore the individual differences, we estimated the MPT model using a latent-class approach. This approach assigns the observed participants to a prespecified number of classes. To identify the number of classes supported by the data, we can use both AIC and BIC. As shown below, both AIC and BIC suggest three classes (for AIC, the second-best number of classes is 2, whereas for BIC it is 4).

The following shows the class weights for the 3 classes with 95% CIs.

prob_all_3 = read.table(text = "Class1  Class2  Class3
    1  0.778342  0.221118  0.000539
    2  0.933240  0.066713  0.000047
    3  0.993203  0.005675  0.001122
    4  0.111495  0.000016  0.888488
    5  0.995100  0.004091  0.000809
    6  0.988116  0.011810  0.000074
    7  0.608450  0.391549  0.000001
    8  0.909574  0.090363  0.000063
    9  0.021995  0.978005  0.000000
   10  0.002698  0.997302  0.000000
   11  0.933240  0.066713  0.000047
   12  0.949402  0.029961  0.020637
   13  0.021995  0.978005  0.000000
   14  0.669720  0.001689  0.328591
   15  0.005985  0.000001  0.994015
   16  0.005985  0.000001  0.994015
   17  0.998627  0.000494  0.000879
   18  0.157501  0.842499  0.000000
   19  0.005985  0.000001  0.994015
   20  0.669720  0.001689  0.328591
   21  0.998627  0.000494  0.000879
   22  0.219580  0.780413  0.000007
   23  0.980709  0.002679  0.016612
   24  0.002698  0.997302  0.000000
   25  0.831118  0.156878  0.012003
   26  0.005985  0.000001  0.994015
   27  0.032741  0.967258  0.000001
   28  0.002698  0.997302  0.000000
   29  0.005985  0.000001  0.994015
   30  0.005985  0.000001  0.994015
   31  0.827949  0.172047  0.000004
   32  0.157501  0.842499  0.000000
   33  0.778342  0.221118  0.000539
   34  0.973390  0.003696  0.022914
   35  0.627250  0.372721  0.000029
   36  0.716448  0.282862  0.000690
   37  0.111495  0.000016  0.888488
   38  0.157501  0.842499  0.000000
   39  0.953707  0.045298  0.000995
   40  0.168368  0.831624  0.000007
   41  0.990579  0.007866  0.001555
   42  0.002698  0.997302  0.000000
   43  0.005985  0.000001  0.994015
   44  0.909574  0.090363  0.000063
   45  0.015925  0.984075  0.000000
   46  0.082817  0.000017  0.917166
   47  0.998093  0.000686  0.001221
   48  0.527891  0.472108  0.000001
   49  0.527891  0.472108  0.000001
   50  0.953707  0.045298  0.000995
   51  0.082817  0.000017  0.917166
   52  0.953707  0.045298  0.000995
   53  0.700474  0.299502  0.000023
   54  0.745667  0.011250  0.243082
   55  0.669720  0.001689  0.328591
   56  0.993203  0.005675  0.001122
   57  0.983560  0.016337  0.000103
   58  0.119625  0.000144  0.880231
   59  0.005985  0.000001  0.994015
   60  0.005985  0.000001  0.994015
   61  0.652889  0.000198  0.346913
   62  0.082817  0.000017  0.917166
   63  0.005985  0.000001  0.994015
   64  0.005985  0.000001  0.994015
   65  0.997074  0.002769  0.000157
   66  0.652889  0.000198  0.346913
   67  0.778342  0.221118  0.000539
   68  0.002698  0.997302  0.000000
   69  0.219580  0.780413  0.000007
   70  0.981669  0.000323  0.018008
   71  0.219580  0.780413  0.000007
   72  0.980709  0.002679  0.016612")

di3 = bind_cols(di, prob_all_3)

di3 %>% 
  select(participant, Class1:Class3) %>% 
  gather("class", "prob", -participant) %>% 
  group_by(participant) %>% 
  # each participant's maximum posterior class-membership probability
  summarize(max_prob = max(prob)) %>% 
  {psych::describe(.$max_prob)} %>% 
  print_table()
#> `summarise()` ungrouping output (override with `.groups` argument)
   vars  n mean   sd median trimmed  mad  min max range  skew kurtosis   se
X1    1 72 0.89 0.13   0.95    0.91 0.06 0.53   1  0.47    -1.12 0.12 0.02

We then add the individuals’ posterior probabilities of class membership to the existing data. These show a relatively clear pattern: the lowest maximum class-membership probability is 0.53, with mean and median around .9.

#> `summarise()` regrouping output by 'participant', 'age_group', 'months' (override with `.groups` argument)
#>          class
#> age_group Class1 Class2 Class3
#>         4     17      6      1
#>         5     14      6      4
#>         6      8      4     12

If we simply classify participants based on their maximum posterior probability of class membership, we see that the largest class, class 1, has almost equally many members from ages 4 and 5, with a slight majority of 4-year-olds; in addition, eight 6-year-olds are still classified into class 1. Class 2 consists of an almost equal number of members from all age groups, with 6-year-olds slightly under-represented. Class 3, finally, largely consists of 6-year-olds.
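Classification by maximum posterior probability amounts to a row-wise `max.col()`; a minimal base-R sketch with two toy rows of class-membership probabilities (cf. `prob_all_3` above):

```r
# toy class-membership probabilities, one row per participant
p <- data.frame(Class1 = c(0.78, 0.11),
                Class2 = c(0.22, 0.00),
                Class3 = c(0.00, 0.89))
names(p)[max.col(p)]   # "Class1" "Class3"
```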

The following figure shows age in months as a function of class membership. The figure clearly shows that age increases monotonically from class 1 to class 3.

Finally, we inspect the parameter estimates as a function of class.

This shows that for class 1, all parameters are estimated with extreme imprecision. For classes 2 and 3, we see very high levels of \(m_o\), a mirror pattern for \(m_t\), and a monotonic increase of \(s\), which is likely clearly above .5 for both classes.

5.4 Plots

5.4.1 Stacked bar chart

df.plot = df.exp2 %>% 
  mutate(answer = factor(answer,
                         labels = c("correct",
                                    "match origin",
                                    "match trajectory",
                                    "match neither")),
         question_index = str_c(outcome_actual, "\n",
                                outcome_counterfactual, "\n",
                                collision),
         question_index = as.factor(question_index)) %>% 
  filter(collision == "collision") %>% 
  count(age_group, answer) %>% 
  group_by(age_group) %>% 
  mutate(proportion = n / sum(n),
         label = str_c(round(proportion, 2) * 100, "%")) %>% 
  ungroup()

df.text = df.exp2 %>% 
  distinct(participant, age_group) %>% 
  count(age_group) %>% 
  mutate(x = 4:6,
         y = 1.07,
         label = str_c("n = ", n))

ggplot(data = df.plot, 
       mapping = aes(x = age_group,
                     y = proportion,
                     fill = answer)) + 
  geom_bar(stat = "identity",
           position = position_fill(reverse = TRUE),
           color = "black") +
  geom_hline(yintercept = 0.25,
             linetype = 2,
             size = 1,
             color = "gray20") +
  geom_text(mapping = aes(label = label),
            color = "black",
            # fontface = "bold",
            size = 5,
            position = position_stack(vjust = .5,
                                      reverse = TRUE)) +
  annotate(geom = "text",
           x = df.text$x,
           y = df.text$y,
           label = df.text$label,
           size = 8) + 
  annotate(geom = "text",
           x = rep(6.5, 4),
           y = df.plot %>% 
             filter(age_group == 6) %>% 
             select(proportion) %>% 
             mutate(cum_prop = cumsum(proportion),
                    lag = lag(cum_prop, 1, default = 0),
                    y = (cum_prop + lag) / 2) %>% 
             pull(y),
           label = c("correct",
                     "match origin",
                     "match trajectory",
                     "match neither"),
           hjust = 0,
           size = 7) + 
  labs(y = "% of responses") +
  scale_y_continuous(breaks = seq(0, 1, 0.25),
                     labels = str_c(seq(0, 100, 25), "%"),
                     expand = expansion(mult = 0)) +
  scale_x_continuous(breaks = 4:6,
                     labels = str_c(4:6, " years"),
                     expand = expansion(add = 0.1)) + 
  coord_cartesian(clip = "off") + 
  scale_fill_manual(values = c(rgb(252, 0, 8, maxColorValue = 255),
                               rgb(254, 255, 13, maxColorValue = 255),
                               rgb(135, 0, 197, maxColorValue = 255),
                               rgb(53, 91, 183, maxColorValue = 255))) +
  theme(text = element_text(size = 30),
        legend.position = "none",
        axis.title.y = element_blank(),
        axis.title.x = element_blank(),
        legend.title = element_blank(),
        plot.margin = margin(t = 0.5,
                             r = 5,
                             b = 0,
                             l = 0,
                             unit = "cm"))

ggsave("../figures/experiment2_bars.pdf",
       width = 8,
       height = 6)

Let’s also look at these results split up into singly-determined and over-determined items.

df.plot_singly_determined = df.exp2 %>% 
  mutate(answer = factor(answer,
                         labels = c("correct",
                                    "match origin",
                                    "match trajectory",
                                    "match neither")),
         question_index = str_c(outcome_actual, "\n",
                                outcome_counterfactual, "\n",
                                collision),
         question_index = as.factor(question_index)) %>% 
  filter(collision == "collision") %>% 
  filter(outcome_counterfactual=="different") %>%
  count(age_group, answer) %>%
  group_by(age_group) %>% 
  mutate(proportion = n / sum(n),
         label = str_c(round(proportion, 2) * 100, "%")) %>% 
  ungroup()


ggplot(data = df.plot_singly_determined, 
       mapping = aes(x = age_group,
                     y = proportion,
                     fill = answer)) + 
  ggtitle("Responses to singly-determined items") +
  geom_bar(stat = "identity",
           position = position_fill(reverse = TRUE),
           color = "black") +
  geom_hline(yintercept = 0.25,
             linetype = 2,
             size = 1,
             color = "gray20") +
  geom_text(mapping = aes(label = label),
            color = "black",
            # fontface = "bold",
            size = 5,
            position = position_stack(vjust = .5,
                                      reverse = TRUE)) +
  annotate(geom = "text",
           x = df.text$x,
           y = df.text$y,
           label = df.text$label,
           size = 8) + 
  annotate(geom = "text",
           x = rep(6.5, 4),
           y = df.plot_singly_determined %>% 
             filter(age_group == 6) %>% 
             select(proportion) %>% 
             mutate(cum_prop = cumsum(proportion),
                    lag = lag(cum_prop, 1, default = 0),
                    y = (cum_prop + lag) / 2) %>% 
             pull(y),
           label = c("correct",
                     "match origin",
                     "match trajectory",
                     "match neither"),
           hjust = 0,
           size = 7) + 
  labs(y = "% of responses") +
  scale_y_continuous(breaks = seq(0, 1, 0.25),
                     labels = str_c(seq(0, 100, 25), "%"),
                     expand = expansion(mult = 0)) +
  scale_x_continuous(breaks = 4:6,
                     labels = str_c(4:6, " years"),
                     expand = expansion(add = 0.1)) + 
  coord_cartesian(clip = "off") + 
  scale_fill_manual(values = c(rgb(252, 0, 8, maxColorValue = 255),
                               rgb(254, 255, 13, maxColorValue = 255),
                               rgb(135, 0, 197, maxColorValue = 255),
                               rgb(53, 91, 183, maxColorValue = 255))) +
  theme(text = element_text(size = 30),
        legend.position = "none",
        axis.title.y = element_blank(),
        axis.title.x = element_blank(),
        legend.title = element_blank(),
        plot.margin = margin(t = 0.5,
                             r = 5,
                             b = 0,
                             l = 0,
                             unit = "cm"))

ggsave("../figures/experiment2_singlydetermined_bars.pdf",
       width = 8,
       height = 6)

df.plot_over_determined = df.exp2 %>% 
  mutate(answer = factor(answer,
                         labels = c("correct",
                                    "match origin",
                                    "match trajectory",
                                    "match neither")),
         question_index = str_c(outcome_actual, "\n",
                                outcome_counterfactual, "\n",
                                collision),
         question_index = as.factor(question_index)) %>% 
  filter(collision == "collision") %>% 
  filter(outcome_counterfactual=="same") %>%
  count(age_group, answer) %>%
  group_by(age_group) %>% 
  mutate(proportion = n / sum(n),
         label = str_c(round(proportion, 2) * 100, "%")) %>% 
  ungroup()


ggplot(data = df.plot_over_determined, 
       mapping = aes(x = age_group,
                     y = proportion,
                     fill = answer)) + 
  ggtitle("Responses to over-determined items") +
  geom_bar(stat = "identity",
           position = position_fill(reverse = TRUE),
           color = "black") +
  geom_hline(yintercept = 0.25,
             linetype = 2,
             size = 1,
             color = "gray20") +
  geom_text(mapping = aes(label = label),
            color = "black",
            # fontface = "bold",
            size = 5,
            position = position_stack(vjust = .5,
                                      reverse = TRUE)) +
  annotate(geom = "text",
           x = df.text$x,
           y = df.text$y,
           label = df.text$label,
           size = 8) + 
  annotate(geom = "text",
           x = rep(6.5, 4),
           y = df.plot_over_determined %>% 
             filter(age_group == 6) %>% 
             select(proportion) %>% 
             mutate(cum_prop = cumsum(proportion),
                    lag = lag(cum_prop, 1, default = 0),
                    y = (cum_prop + lag) / 2) %>% 
             pull(y),
           label = c("correct",
                     "match origin",
                     "match trajectory",
                     "match neither"),
           hjust = 0,
           size = 7) + 
  labs(y = "% of responses") +
  scale_y_continuous(breaks = seq(0, 1, 0.25),
                     labels = str_c(seq(0, 100, 25), "%"),
                     expand = expansion(mult = 0)) +
  scale_x_continuous(breaks = 4:6,
                     labels = str_c(4:6, " years"),
                     expand = expansion(add = 0.1)) + 
  coord_cartesian(clip = "off") + 
  scale_fill_manual(values = c(rgb(252, 0, 8, maxColorValue = 255),
                               rgb(254, 255, 13, maxColorValue = 255),
                               rgb(135, 0, 197, maxColorValue = 255),
                               rgb(53, 91, 183, maxColorValue = 255))) +
  theme(text = element_text(size = 30),
        legend.position = "none",
        axis.title.y = element_blank(),
        axis.title.x = element_blank(),
        legend.title = element_blank(),
        plot.margin = margin(t = 0.5,
                             r = 5,
                             b = 0,
                             l = 0,
                             unit = "cm"))

ggsave("../figures/experiment2_overdetermined_bars.pdf",
       width = 8,
       height = 6)

5.4.2 Group-level parameters

The following plot shows the posterior distributions of the group-level parameters, separated by age group.

#> Joining, by = c(".chain", ".iteration", ".draw", "parameter")

6 Experiment 3

6.2 Stats

6.2.1 Binomial test

estimate statistic p.value parameter conf.low conf.high method alternative
0.667 20 0.099 30 0.472 0.827 Exact binomial test two.sided

6.2.3 MPT analysis

First, we get a simple point estimate.

#> [1] "Model fitting begins at 2020-09-22 09:54:28"
#> [1] "Model fitting stopped at 2020-09-22 09:54:28"
#> Time difference of 0.003087997 secs
#>   Log.Likelihood     G.Squared df p.value
#> 1      -19.09543 -7.105427e-15  0       1
#>   estimates  lower.conf upper.conf
#> s 0.3333333 -0.00404035   0.670707
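For reference, a point estimate like the one above can be obtained with MPTinR; a sketch, assuming aggregated response counts in a vector `data.agg` that matches the response categories of `model3.eqn` (the counts shown are placeholders, not the actual data):

```r
library(MPTinR)

# Placeholder counts; the actual aggregated data are not shown here.
data.agg = c(10, 20)

fit = fit.mpt(data = data.agg,
              model.filename = "models/model3.eqn",
              restrictions.filename = list("g = 0.5"))
fit$parameters
```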

However, to be comparable with the previous analyses, we conduct a Bayesian analysis of the aggregated data. Because we only have one trial per child, fitting the Bayesian model to the aggregated data is the natural choice here; it assumes that individual variability is consistent with the variability of a multinomial distribution.

#> Call: 
#> simpleMPT(eqnfile = "models/model3.eqn", data = data.agg2, restrictions = list("g=0.5"), 
#>     n.chains = 4)
#> 
#>         Mean    SD  2.5%   50% 97.5% Time-series SE n.eff  Rhat R_95%
#> mean_s 0.324 0.149 0.036 0.324 0.608             NA    NA 1.001 1.007
#> 
#> 
#> ##############
#>  Model fit statistics (posterior predictive p-values):
#> Use PPP(fittedModel) to get T1 and T2 posterior predictive checks.
#> List of 7
#>  $ summary  :List of 6
#>   ..$ groupParameters      :List of 6
#>   ..$ individParameters    : num [1, 1, 1:9] 0.3242 0.1491 0.0356 0.324 0.6075 ...
#>   .. ..- attr(*, "dimnames")=List of 3
#>   ..$ fitStatistics        : NULL
#>   ..$ transformedParameters: NULL
#>   ..$ call                 : chr "(summarizeMPT called manually)"
#>   ..$ round                : num 3
#>   ..- attr(*, "class")= chr "summary.simpleMPT"
#>  $ mptInfo  :List of 15
#>   ..$ model                : chr "simpleMPT"
#>   ..$ thetaNames           :'data.frame':    1 obs. of  2 variables:
#>   ..$ thetaUnique          : chr "s"
#>   ..$ thetaFixed           : NULL
#>   ..$ MPT                  :List of 13
#>   ..$ eqnfile              : chr "models/model3.eqn"
#>   ..$ data                 :'data.frame':    1 obs. of  2 variables:
#>   ..$ restrictions         :List of 1
#>   ..$ covData              : NULL
#>   ..$ corProbit            : logi TRUE
#>   ..$ predTable            : NULL
#>   ..$ predFactorLevels     : NULL
#>   ..$ predType             : NULL
#>   ..$ transformedParameters: NULL
#>   ..$ hyperprior           :List of 2
#>  $ mcmc.summ: num [1:3, 1:9] 0.324 NaN 0.324 0.149 NA ...
#>   ..- attr(*, "dimnames")=List of 2
#>  $ runjags  :List of 1
#>   ..$ mcmc:List of 4
#>   .. ..- attr(*, "class")= chr "mcmc.list"
#>  $ postpred : NULL
#>  $ call     : language simpleMPT(eqnfile = "models/model3.eqn", data = data.agg2, restrictions = list("g=0.5"),      n.chains = 4)
#>  $ time     : 'difftime' num 0.102246046066284
#>   ..- attr(*, "units")= chr "secs"
#>  - attr(*, "class")= chr "simpleMPT"
#> [[1]]
#> Markov Chain Monte Carlo (MCMC) output:
#> Start = 501 
#> End = 519 
#> Thinning interval = 3 
#>           mean sd theta[1,1]
#> [1,] 0.3324378 NA  0.3324378
#> [2,] 0.2463047 NA  0.2463047
#> [3,] 0.3111365 NA  0.3111365
#> [4,] 0.2834747 NA  0.2834747
#> [5,] 0.2719397 NA  0.2719397
#> [6,] 0.5487529 NA  0.5487529
#> [7,] 0.1183583 NA  0.1183583
#> 
#> [[2]]
#> Markov Chain Monte Carlo (MCMC) output:
#> Start = 501 
#> End = 519 
#> Thinning interval = 3 
#>            mean sd theta[1,1]
#> [1,] 0.07536277 NA 0.07536277
#> [2,] 0.15911382 NA 0.15911382
#> [3,] 0.12745682 NA 0.12745682
#> [4,] 0.40226190 NA 0.40226190
#> [5,] 0.56770968 NA 0.56770968
#> [6,] 0.45337213 NA 0.45337213
#> [7,] 0.51434658 NA 0.51434658
#> 
#> [[3]]
#> Markov Chain Monte Carlo (MCMC) output:
#> Start = 501 
#> End = 519 
#> Thinning interval = 3 
#>            mean sd theta[1,1]
#> [1,] 0.09352926 NA 0.09352926
#> [2,] 0.28385651 NA 0.28385651
#> [3,] 0.31751668 NA 0.31751668
#> [4,] 0.51264184 NA 0.51264184
#> [5,] 0.33866738 NA 0.33866738
#> [6,] 0.20132874 NA 0.20132874
#> [7,] 0.23585196 NA 0.23585196
#> 
#> [[4]]
#> Markov Chain Monte Carlo (MCMC) output:
#> Start = 501 
#> End = 519 
#> Thinning interval = 3 
#>           mean sd theta[1,1]
#> [1,] 0.2834439 NA  0.2834439
#> [2,] 0.3789799 NA  0.3789799
#> [3,] 0.3292678 NA  0.3292678
#> [4,] 0.3402195 NA  0.3402195
#> [5,] 0.6652232 NA  0.6652232
#> [6,] 0.3816096 NA  0.3816096
#> [7,] 0.2482009 NA  0.2482009
#> 
#> attr(,"class")
#> [1] "mcmc.list"

Here are the mode and mean of the estimate of s, and their 80% HDIs, based on this analysis.

Mode (reported, for comparison w/Exp 2 4-year-olds):

#> Warning: 'mode_hdcih' is deprecated.
#> Use 'mode_hdci' instead.
#> See help("Deprecated") and help("tidybayes-deprecated").
#> # A tibble: 1 x 7
#>   .variable .value .lower .upper .width .point .interval
#>   <chr>      <dbl>  <dbl>  <dbl>  <dbl> <chr>  <chr>    
#> 1 mean       0.326  0.125  0.519    0.8 mode   hdci

Mean:

#> Warning: 'mean_hdcih' is deprecated.
#> Use 'mean_hdci' instead.
#> See help("Deprecated") and help("tidybayes-deprecated").
#> # A tibble: 1 x 7
#>   .variable .value .lower .upper .width .point .interval
#>   <chr>      <dbl>  <dbl>  <dbl>  <dbl> <chr>  <chr>    
#> 1 mean       0.324  0.125  0.519    0.8 mean   hdci
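As the warnings note, `mode_hdcih()` and `mean_hdcih()` are deprecated in tidybayes; the same summaries can be obtained with the current point-interval functions. A sketch, assuming the posterior draws of `s` sit in a data frame `df.draws` with a `mean` column (both names are assumptions):

```r
# Current tidybayes equivalents of the deprecated *_hdcih calls,
# with an 80% highest-density continuous interval.
df.draws %>% 
  mode_hdci(mean, .width = 0.8)

df.draws %>% 
  mean_hdci(mean, .width = 0.8)
```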

7 Session info

#> R version 3.6.0 (2019-04-26)
#> Platform: x86_64-apple-darwin15.6.0 (64-bit)
#> Running under: macOS High Sierra 10.13.6
#> 
#> Matrix products: default
#> BLAS:   /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
#> LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
#> 
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#>  [1] forcats_0.5.0          stringr_1.4.0          dplyr_1.0.2           
#>  [4] purrr_0.3.4            readr_1.3.1            tidyr_1.1.2           
#>  [7] tibble_3.0.3           ggplot2_3.3.2          tidyverse_1.3.0       
#> [10] afex_0.28-0            lme4_1.1-23            MPTinR_1.13.0         
#> [13] ggridges_0.5.2         tidybayes_2.1.1        TreeBUGS_1.4.5        
#> [16] BayesFactor_0.9.12-4.2 Matrix_1.2-18          coda_0.19-3           
#> [19] ggeffects_0.16.0       broom_0.7.0            xtable_1.8-4          
#> [22] kableExtra_1.2.1       knitr_1.29             janitor_2.0.1         
#> 
#> loaded via a namespace (and not attached):
#>   [1] readxl_1.3.1         backports_1.1.10     Hmisc_4.4-1         
#>   [4] plyr_1.8.6           splines_3.6.0        svUnit_1.0.3        
#>   [7] elliptic_1.4-0       digest_0.6.25        htmltools_0.5.0     
#>  [10] lmerTest_3.1-2       fansi_0.4.1          checkmate_2.0.0     
#>  [13] magrittr_1.5         contfrac_1.1-12      cluster_2.1.0       
#>  [16] openxlsx_4.2.2       modelr_0.1.8         jpeg_0.1-8.1        
#>  [19] colorspace_1.4-1     blob_1.2.1           rvest_0.3.6         
#>  [22] ggdist_2.2.0         haven_2.3.1          xfun_0.17           
#>  [25] crayon_1.3.4         jsonlite_1.7.1       survival_3.2-3      
#>  [28] glue_1.4.2           gtable_0.3.0         emmeans_1.5.1       
#>  [31] webshot_0.5.2        MatrixModels_0.4-1   distributional_0.2.0
#>  [34] car_3.0-9            abind_1.4-5          scales_1.1.1        
#>  [37] mvtnorm_1.1-1        GGally_2.0.0         DBI_1.1.0           
#>  [40] Rcpp_1.0.5           viridisLite_0.3.0    htmlTable_2.1.0     
#>  [43] tmvnsim_1.0-2        HDInterval_0.2.2     foreign_0.8-71      
#>  [46] deSolve_1.28         Formula_1.2-3        htmlwidgets_1.5.1   
#>  [49] httr_1.4.2           arrayhelpers_1.1-0   RColorBrewer_1.1-2  
#>  [52] runjags_2.0.4-6      logspline_2.1.16     ellipsis_0.3.1      
#>  [55] reshape_0.8.8        pkgconfig_2.0.3      farver_2.0.3        
#>  [58] nnet_7.3-14          dbplyr_1.4.4         utf8_1.1.4          
#>  [61] labeling_0.3         tidyselect_1.1.0     rlang_0.4.7         
#>  [64] reshape2_1.4.4       munsell_0.5.0        cellranger_1.1.0    
#>  [67] tools_3.6.0          cli_2.0.2            generics_0.0.2      
#>  [70] sjlabelled_1.1.6     evaluate_0.14        yaml_2.2.1          
#>  [73] fs_1.5.0             zip_2.1.1            hypergeo_1.2-13     
#>  [76] pbapply_1.4-3        nlme_3.1-149         xml2_1.3.2          
#>  [79] compiler_3.6.0       rstudioapi_0.11      curl_4.3            
#>  [82] png_0.1-7            reprex_0.3.0         statmod_1.4.34      
#>  [85] stringi_1.5.3        highr_0.8            Brobdingnag_1.2-6   
#>  [88] lattice_0.20-41      psych_2.0.8          nloptr_1.2.2.2      
#>  [91] vctrs_0.3.4          pillar_1.4.6         lifecycle_0.2.0     
#>  [94] estimability_1.3     data.table_1.13.0    insight_0.9.6       
#>  [97] R6_2.4.1             latticeExtra_0.6-29  bookdown_0.20       
#> [100] gridExtra_2.3        rio_0.5.16           rjags_4-10          
#> [103] boot_1.3-25          MASS_7.3-53          gtools_3.8.2        
#> [106] assertthat_0.2.1     withr_2.2.0          mnormt_2.0.2        
#> [109] parallel_3.6.0       hms_0.5.3            grid_3.6.0          
#> [112] rpart_4.1-15         minqa_1.2.4          rmarkdown_2.3       
#> [115] snakecase_0.11.0     carData_3.0-4        numDeriv_2016.8-1.1 
#> [118] lubridate_1.7.9      base64enc_0.1-3